
    Speeding up Cylindrical Algebraic Decomposition by Gröbner Bases

    Gröbner Bases and Cylindrical Algebraic Decomposition are generally thought of as two, rather different, methods of looking at systems of equations and, in the case of Cylindrical Algebraic Decomposition, inequalities. However, even for a mixed system of equalities and inequalities, it is possible to apply Gröbner bases to the (conjoined) equalities before invoking CAD. We see that this is, quite often but not always, a beneficial preconditioning of the CAD problem. It is also possible to precondition the (conjoined) inequalities with respect to the equalities, and this can also be useful in many cases.
    Comment: To appear in Proc. CICM 2012, LNCS 736
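    The preconditioning idea can be sketched with SymPy on a hypothetical toy system (SymPy's `groebner` stands in for the Gröbner-basis engine; the CAD step itself is not shown and the polynomials are illustrative):

    ```python
    from sympy import Rational, groebner, symbols

    x, y = symbols('x y')

    # Hypothetical mixed system: the conjoined equalities (the inequalities,
    # e.g. x > 0, would be passed on to CAD unchanged).
    equalities = [x**2 + y**2 - 1, x*y - Rational(1, 4)]

    # Replace the equalities by a lex Groebner basis before invoking CAD.
    gb = groebner(equalities, x, y, order='lex')

    # The lex basis triangularizes the system: its last polynomial involves
    # only y, which is the shape that CAD projection benefits from.
    print(gb.exprs)
    ```

    Because the basis generates the same ideal, the equality set it replaces describes the same variety, so CAD on the preconditioned input answers the original question.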

    Singular ways to search for the Higgs boson

    The discovery or exclusion of the fundamental standard scalar is a hot topic, given the data of LEP, the Tevatron and the LHC, as well as the advanced status of the pertinent theoretical calculations. With the current statistics at the hadron colliders, the workhorse decay channel, at all relevant H masses, is H to WW, followed by W to light leptons. Using phase-space singularity techniques, we construct and study a plethora of "singularity variables" meant to facilitate the difficult tasks of separating signal and backgrounds and of measuring the mass of a putative signal. The simplest singularity variables are not invariant under boosts along the collider's axes, and the simulation of their distributions requires a good understanding of parton distribution functions, perhaps not a serious shortcoming during the boson hunting season. The derivation of longitudinally boost-invariant variables, which are functions of the four charged-lepton observables that share this invariance, is quite elaborate. But their use is simple and they are, in a kinematical sense, optimal.
    Comment: 19 pages, including 21 figures

    Capital allocation for credit portfolios with kernel estimators

    Determining contributions by sub-portfolios or single exposures to portfolio-wide economic capital for credit risk is an important risk measurement task. Often economic capital is measured as Value-at-Risk (VaR) of the portfolio loss distribution. For many of the credit portfolio risk models used in practice, the VaR contributions then have to be estimated from Monte Carlo samples. In the context of a partly continuous loss distribution (i.e. continuous except for a positive point mass on zero), we investigate how to combine kernel estimation methods with importance sampling to achieve more efficient (i.e. less volatile) estimation of VaR contributions.
    Comment: 22 pages, 12 tables, 1 figure, some amendments
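    A minimal sketch of the plain kernel estimator of VaR contributions on a toy two-exposure portfolio; the loss model, Gaussian kernel, and bandwidth rule are all assumptions for illustration, and the paper's importance-sampling combination is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical sub-portfolio losses from a toy continuous loss model.
    l1 = rng.exponential(1.0, n)
    l2 = rng.exponential(2.0, n)
    L = l1 + l2

    alpha = 0.99
    var = np.quantile(L, alpha)            # portfolio-wide VaR estimate

    # Kernel (Nadaraya-Watson style) estimate of E[L_i | L = VaR]: every
    # sample contributes, weighted by its kernel distance to VaR. A naive
    # estimator would average only the few samples in a thin window around
    # VaR, which is far more volatile.
    h = L.std() * n ** (-0.2)              # rule-of-thumb bandwidth
    w = np.exp(-0.5 * ((L - var) / h) ** 2)
    c1 = np.sum(w * l1) / np.sum(w)        # VaR contribution of exposure 1
    c2 = np.sum(w * l2) / np.sum(w)        # VaR contribution of exposure 2
    ```

    By construction the contributions add up (approximately) to the portfolio VaR, the full-allocation property expected of VaR contributions.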

    On the Regularity Property of Differential Polynomials Modulo Regular Differential Chains

    This paper provides an algorithm which computes the normal form of a rational differential fraction modulo a regular differential chain if, and only if, this normal form exists. A regularity test for polynomials modulo regular chains is revisited in the nondifferential setting and lifted to differential algebra. A new characterization of regular chains is provided.

    Improving NFS for the Discrete Logarithm Problem in Non-prime Finite Fields

    The aim of this work is to investigate the hardness of the discrete logarithm problem in fields GF(p^n) where n is a small integer greater than 1. Though less studied than the small-characteristic case or the prime-field case, the difficulty of this problem is at the heart of security evaluations for torus-based and pairing-based cryptography. The best known method for solving this problem is the Number Field Sieve (NFS). A key ingredient in this algorithm is the ability to find good polynomials that define the extension fields used in NFS. We design two new methods for this task, modifying the asymptotic complexity and paving the way for record-breaking computations. We exemplify these results with the computation of discrete logarithms over a field GF(p^2) whose cardinality is 180 digits (595 bits) long.

    Computing Individual Discrete Logarithms Faster in GF(p^n) with the NFS-DL Algorithm

    The Number Field Sieve (NFS) algorithm is the best known method to compute discrete logarithms (DL) in finite fields \mathbb{F}_{p^n}, with p medium to large and n \geq 1 small. This algorithm comprises four steps: polynomial selection, relation collection, linear algebra and, finally, individual logarithm computation. The first step outputs two polynomials defining two number fields, and a map from the polynomial ring over the integers modulo each of these polynomials to \mathbb{F}_{p^n}. After the relation collection and linear algebra phases, the (virtual) logarithm of a subset of elements in each number field is known. Given the target element in \mathbb{F}_{p^n}, the fourth step computes a preimage in one number field. If one can write the target preimage as a product of elements of known (virtual) logarithm, then one can deduce the discrete logarithm of the target. As recently shown by the Logjam attack, this final step can be critical when it can be computed very quickly. But computing an individual DL is much slower in medium- and large-characteristic non-prime fields \mathbb{F}_{p^n} with n \geq 3 than in prime fields and quadratic fields \mathbb{F}_{p^2}. We optimize the first part of the individual DL computation, the \emph{booting step}, by dramatically reducing the size of the preimage norm. Its smoothness probability is higher, hence the running time of the booting step is much improved. Our method is very efficient for small extension fields with 2 \leq n \leq 6 and applies to any n > 1, in medium and large characteristic.
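    The final "write the target as a product of elements of known logarithm" idea can be illustrated with plain index calculus in a toy prime field (not the number-field booting step the paper optimizes; the prime, factor base, and target below are all illustrative):

    ```python
    def dlog_table(g, p):
        """Brute-force discrete logs base g in F_p* (toy sizes only)."""
        table, x = {}, 1
        for k in range(p - 1):
            table[x] = k
            x = x * g % p
        return table

    p = 1019                               # toy prime
    # A generator hits every element of F_p*, i.e. fills the whole table.
    g = next(h for h in range(2, p) if len(dlog_table(h, p)) == p - 1)
    table = dlog_table(g, p)

    # "Known (virtual) logarithms": here, the logs of a tiny factor base.
    base = [2, 3, 5, 7]
    base_logs = {q: table[q] for q in base}

    def smooth_factor(m, base):
        """Exponent dict if m factors completely over base, else None."""
        exps = {}
        for q in base:
            while m % q == 0:
                m //= q
                exps[q] = exps.get(q, 0) + 1
        return exps if m == 1 else None

    # Randomise the target by powers of g until the result is smooth over
    # the base, then read off the target's log from the known logs.
    target = 509
    k, exps = 0, smooth_factor(target, base)
    while exps is None:
        k += 1
        exps = smooth_factor(target * pow(g, k, p) % p, base)
    log_target = (sum(e * base_logs[q] for q, e in exps.items()) - k) % (p - 1)
    ```

    The paper's booting-step optimization shrinks the norm of the preimage that plays the role of `target * g^k` here, so that the smoothness test succeeds far more often.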

    New Complexity Trade-Offs for the (Multiple) Number Field Sieve Algorithm in Non-Prime Fields

    The selection of polynomials to represent number fields crucially determines the efficiency of the Number Field Sieve (NFS) algorithm for solving the discrete logarithm in a finite field. An important recent work due to Barbulescu et al. builds upon existing works to propose two new methods for polynomial selection when the target field is a non-prime field. These methods are called the generalised Joux-Lercier (GJL) and the Conjugation methods. In this work, we propose a new method (which we denote as \mathcal{A}) for polynomial selection for the NFS algorithm in fields \mathbb{F}_{Q}, with Q = p^n and n > 1. The new method both subsumes and generalises the GJL and the Conjugation methods and provides new trade-offs for both n composite and n prime. Let us denote the variant of the (multiple) NFS algorithm using the polynomial selection method X by (M)NFS-X. Asymptotic analysis is performed for both the NFS-\mathcal{A} and the MNFS-\mathcal{A} algorithms. In particular, when p = L_Q(2/3, c_p), for c_p \in [3.39, 20.91], the complexity of NFS-\mathcal{A} is better than the complexities of all previous algorithms, whether classical or MNFS. The MNFS-\mathcal{A} algorithm provides lower complexity than the NFS-\mathcal{A} algorithm; for c_p \in (0, 1.12] \cup [1.45, 3.15], the complexity of MNFS-\mathcal{A} is the same as that of MNFS-Conjugation, and for c_p \notin (0, 1.12] \cup [1.45, 3.15], the complexity of MNFS-\mathcal{A} is lower than that of all previous methods.
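    For reference, the complexity statements above use the standard subexponential L-notation from the NFS literature, restated here for convenience:

    ```latex
    L_Q(\alpha, c) = \exp\!\bigl( (c + o(1)) \, (\ln Q)^{\alpha} \, (\ln \ln Q)^{1-\alpha} \bigr),
    \qquad 0 \le \alpha \le 1,
    ```

    so \alpha = 0 corresponds to polynomial time and \alpha = 1 to fully exponential time. Writing p = L_Q(2/3, c_p) thus parametrises the boundary between medium and large characteristic by the constant c_p.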